Simultaneous linear-quadratic optimal control design via static output feedback

Author(s):  
James Lam ◽  
Yong-Yan Cao

Author(s):  
Verica Radisavljevic-Gajic

This paper is an overview of fundamental linear–quadratic optimal control techniques used for linear dynamic systems. The presentation is suitable for undergraduate and graduate students and practicing engineers. The paper can be used by class instructors as supplemental material for undergraduate and graduate control system courses. The paper shows how to find the solution to a dynamic optimization problem: optimize an integral quadratic performance criterion along trajectories of a linear dynamic system over an infinite time period (steady-state linear–quadratic optimal control problem). The solution is obtained by solving a static optimization problem. All derivations done in the paper require only elementary knowledge of linear algebra and state space linear system analysis. The results are presented also for the observer-driven linear–quadratic steady-state optimal controller, output feedback-based linear–quadratic optimal controller, and the Kalman filter-driven linear–quadratic stochastic optimal controller. Having full understanding of derivations of the linear–quadratic optimal controller, observer-driven linear–quadratic optimal controller, optimal linear–quadratic output feedback controller, and optimal linear–quadratic stochastic controller, students and engineers will feel confident to use these controllers in numerous engineering and scientific applications. Several optimal linear–quadratic control case studies involving models of real physical systems, with the corresponding Simulink block diagrams and MATLAB codes, are included in the paper.
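The steady-state linear-quadratic problem described above reduces to solving an algebraic Riccati equation for the optimal feedback gain. The following is a minimal sketch of that computation in Python using SciPy (an assumption for illustration; the paper itself uses MATLAB and Simulink, and the double-integrator plant below is not one of the paper's case studies):

```python
# Hedged sketch: steady-state LQR for xdot = A x + B u, minimizing the
# infinite-horizon quadratic cost of x'Qx + u'Ru. The double integrator
# is an illustrative plant, not one of the paper's models.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                # state weighting
R = np.array([[1.0]])        # control weighting

# Solves A'P + PA - P B R^{-1} B' P + Q = 0 for the stabilizing P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback: u = -K x

# The closed-loop matrix A - BK is Hurwitz, as the theory guarantees
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))   # True
```

For this plant the solver returns the well-known analytic gain K = [1, √3], which can be checked against the Riccati equation by hand.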


2011 ◽  
Vol 383-390 ◽  
pp. 7258-7264 ◽  
Author(s):  
Zhao Yang Xu ◽  
Xiao Diao Huang

In this paper, a controller for a single inverted pendulum system is designed based on linear quadratic optimal control. The currently popular co-simulation approach is used to exploit the respective strengths of two software packages for the simulation. The quality of the feedback controller based on linear quadratic optimal control is then observed and analyzed through both static and dynamic methods.
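A linear-quadratic design for an inverted pendulum can be sketched as follows. This is an illustration only, not the authors' model: the cart and pole parameters, the state ordering, and the weighting matrices are all assumed values, and the dynamics are linearized about the upright equilibrium:

```python
# Hedged sketch (assumed parameters, not the paper's model): LQR design for
# a single inverted pendulum on a cart, linearized about the upright point.
# State x = [cart position, cart velocity, pole angle, pole angular rate].
import numpy as np
from scipy.linalg import solve_continuous_are

M, m, l, g = 0.5, 0.2, 0.3, 9.81   # cart mass, pole mass, pole length, gravity
A = np.array([[0, 1, 0,                       0],
              [0, 0, -m * g / M,              0],
              [0, 0, 0,                       1],
              [0, 0, (M + m) * g / (M * l),   0]])
B = np.array([[0.0], [1 / M], [0.0], [-1 / (M * l)]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])   # penalize the pole angle most heavily
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # u = -K x stabilizes the upright position

print(np.all(np.linalg.eigvals(A - B @ K).real < 0))   # True
```

Because the linearized pair (A, B) is controllable and Q is positive definite, the resulting closed loop is guaranteed stable regardless of the particular weight values chosen.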


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

Abstract: In this paper, we will deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we will suppose that the knowledge that an agent has on the current system is represented by a probability distribution π on the space of matrices. Furthermore, we will assume that such a probability measure is opportunely updated to take into account the increased experience that the agent obtains while exploring the environment, approximating with increasing accuracy the underlying dynamics. Under these assumptions, we will show that the optimal control obtained by solving the "average" linear quadratic optimal control problem with respect to a certain π converges to the optimal control of the linear quadratic optimal control problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms where prior and posterior probability distributions describing the knowledge on the uncertain system are recursively updated. In the last section, we will show a numerical test that confirms the theoretical results.
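The convergence behavior described in the abstract can be illustrated with a simplified Monte Carlo sketch. This is only an illustration of the idea, not the authors' algorithm or numerical test: here the "average" problem is approximated by designing for the empirical mean of sampled dynamics matrices, and the distribution is made to concentrate by shrinking its spread:

```python
# Illustrative simplification (not the paper's method): as the distribution
# over the unknown dynamics matrix A concentrates on the true A, the LQR gain
# designed for the sampled mean dynamics approaches the true optimal gain.
# The perturbation model and all names here are assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

A_true = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K_true = lqr_gain(A_true, B, Q, R)

rng = np.random.default_rng(0)
errors = []
for sigma in [1.0, 0.1, 0.01]:             # shrinking uncertainty about A
    samples = A_true + sigma * rng.standard_normal((200, 2, 2))
    A_mean = samples.mean(axis=0)           # mean dynamics under the distribution
    errors.append(np.linalg.norm(lqr_gain(A_mean, B, Q, R) - K_true))

print(errors[0] > errors[-1])   # gain error shrinks as the distribution concentrates
```

The paper's actual result is stronger, averaging the cost over π rather than the dynamics; this sketch only visualizes the limiting behavior.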

